
    Localization and Higher Spin/CFT Dualities

    Localization is a powerful tool for computing physical quantities such as partition functions, free energies, and expectation values of certain operators exactly, at any coupling, in many supersymmetric theories. Thanks to this property, the technique provides highly nontrivial tests of the AdS/CFT correspondence. We apply the localization procedure to the most general three-dimensional N = 1 Chern-Simons matter theories, which had not been studied in the previous localization literature, and show that they too can be formally localized. The other focus of this body of work is an important aspect of high-energy physics: higher spin theories and their conjectured CFT duals. Higher spin theory is a remarkable extension of Einstein gravity in which massless particles of all spins are described by self-consistent, fully nonlinear field equations. We test the duality between supersymmetric higher spin theories in AdS4 and the corresponding CFTs by comparing the one-loop free energies on both sides. We show that the mismatch of free energies in the duality between the Type-B higher spin theory and the fermionic vector model cannot be resolved by the introduction of supersymmetry. We then turn to another test of the HS/CFT correspondence, comparing the tree-level three-point functions on both sides. We derive the full structure of the three-point Witten diagrams for both parity-preserving and parity-violating bosonic HS theories, and show that they match perfectly with the corresponding correlators on the CFT side.

    ABJ Quadrality

    We study the physical consequences of adding orientifolds to the ABJ triality, the correspondence among the 3d N=6 superconformal Chern-Simons theory known as ABJ theory, type IIA string theory in AdS_4 x CP^3, and the N=6 supersymmetric (SUSY) Vasiliev higher spin theory in AdS_4. After adding the orientifolds, it is known that the gauge group of the ABJ theory becomes O(N_1)xUSp(2N_2), the background of the string theory is replaced by AdS_4 x CP^3/Z_2, and the supersymmetries of both theories reduce to N=5. We propose that adding the orientifolds to the N=6 Vasiliev theory leads to the N=5 SUSY Vasiliev theory. It turns out that the N=5 case is more involved because there are two formulations of the N=5 Vasiliev theory, with either O or USp internal symmetry. We show that the two N=5 Vasiliev theories can be understood as certain projections of the N=6 Vasiliev theory, which we identify with the orientifold projections in the Vasiliev theory. We conjecture that the O(N_1)xUSp(2N_2) ABJ theory has two vector-model-like limits, N_2 >> N_1 and N_1 >> N_2, which correspond to the semi-classical N=5 Vasiliev theories with O(N_1) and USp(2N_2) internal symmetries, respectively. These correspondences, together with the standard AdS/CFT correspondence, comprise the ABJ quadrality among the N=5 ABJ theory, string/M-theory, and the two N=5 Vasiliev theories. We provide a precise holographic dictionary for the correspondences by comparing correlation functions of the stress tensor and flavor currents. Our conjecture is supported by various pieces of evidence, such as agreement of the spectra, the one-loop free energies, and SUSY enhancement on both sides. We also predict the leading free energy of the N=5 Vasiliev theory from the CFT side. As a byproduct, we give a derivation of the relation between the parity-violating phase in the N=6 Vasiliev theory and the parameters of the N=6 ABJ theory, which was conjectured in arXiv:1207.4485.

    Chern-Simons Matter Theories and Higher Spin Gravity

    We compute the parity-violating three-point amplitudes with one scalar leg in higher spin gravity and compare the results with those of Chern-Simons matter theories. The three-point correlators of the free boson, the free fermion, the critical vector model, and the Gross-Neveu model are reproduced, including their dependence on the Chern-Simons coupling. We also perform a simple test of the modified higher spin equations proposed in arXiv:1605.02662 [hep-th] and find that the results are consistent with the AdS/CFT correspondence.
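    For context, the CFT-side comparison in this literature is usually organised around the standard decomposition of higher spin current three-point functions into free-boson, free-fermion, and parity-odd tensor structures. A schematic form of that decomposition is sketched below; the phase θ is related to the Chern-Simons 't Hooft coupling, the overall normalisation is left schematic, and correlators with a scalar leg involve an analogous but smaller set of structures.

```latex
\langle J_{s_1} J_{s_2} J_{s_3} \rangle
  = \tilde{N}\Big( \cos^{2}\theta \,\langle J_{s_1} J_{s_2} J_{s_3} \rangle_{\mathrm{boson}}
  + \sin^{2}\theta \,\langle J_{s_1} J_{s_2} J_{s_3} \rangle_{\mathrm{fermion}}
  + \cos\theta \sin\theta \,\langle J_{s_1} J_{s_2} J_{s_3} \rangle_{\mathrm{odd}} \Big)
```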

    Thermostat-assisted continuously-tempered Hamiltonian Monte Carlo for Bayesian learning

    We propose a new sampling method, thermostat-assisted continuously-tempered Hamiltonian Monte Carlo, for Bayesian learning on large datasets and multimodal distributions. It simulates the Nosé-Hoover dynamics of a continuously-tempered Hamiltonian system built on the distribution of interest. A significant advantage of this method is that it not only draws representative i.i.d. samples efficiently when the distribution contains multiple isolated modes, but is also capable of adaptively neutralising the noise arising from mini-batches and thus maintaining accurate sampling. Beyond studies of its properties on synthetic distributions, experiments on three real datasets demonstrate performance gains over several strong baselines with various types of neural networks plugged in.
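    To make the thermostat mechanism concrete, here is a minimal sketch of a stochastic-gradient Nosé-Hoover sampler, the ingredient that absorbs mini-batch gradient noise; the function names, step sizes, and the omission of the continuous-tempering variable are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sg_nose_hoover_sampler(grad_log_post, theta0, n_iters=10_000,
                           step=1e-3, diffusion=1.0, rng=None):
    """Sketch of a stochastic-gradient Nose-Hoover thermostat sampler.

    grad_log_post(theta) returns a (possibly mini-batch-noisy) estimate of
    the gradient of the log posterior at theta.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta0.size
    theta, p, xi = theta0.astype(float), rng.standard_normal(d), diffusion
    samples = []
    for _ in range(n_iters):
        # momentum step: gradient force, thermostat friction, injected noise
        noise = np.sqrt(2.0 * diffusion * step) * rng.standard_normal(d)
        p += step * grad_log_post(theta) - step * xi * p + noise
        theta += step * p
        # thermostat step: xi adapts so the average kinetic energy matches the
        # target temperature, neutralising unknown mini-batch noise
        xi += step * (p @ p / d - 1.0)
        samples.append(theta.copy())
    return np.array(samples)

# Example: sampling a 2-D standard normal with an exact gradient
if __name__ == "__main__":
    draws = sg_nose_hoover_sampler(lambda th: -th, np.zeros(2), n_iters=5000)
    print(draws[1000:].mean(axis=0), draws[1000:].std(axis=0))
```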

    Interference-aware coordinated power allocation in autonomous Wi-Fi environment

    Self-managed access points (APs) with growing intelligence can optimize their own performance, but may have negative impacts on other APs and lack energy efficiency. In this paper, we focus on modeling the coordinated interaction among interest-independent and self-configured APs, and conduct a power allocation case study in an autonomous Wi-Fi scenario. Specifically, we build a coordination Wi-Fi platform (CWP), a public platform through which APs interact with each other. OpenWrt-based APs in the physical world are mapped to virtual agents (VAs) in CWP, which communicate with each other through a standard request-reply process defined as the AP talk protocol (ATP). With ATP, an active interference measurement methodology is proposed that reflects both in-range interference and hidden-terminal interference, and a Nash bargaining-based power control is further formulated for interference reduction. CWP is deployed in a real office environment, where coordinated interactions between VAs bring up to a 40-Mb/s throughput improvement with the Nash bargaining-based power control in multi-AP experiments.
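    As a rough illustration of what Nash bargaining-based power control means here, the sketch below solves a toy two-AP version by grid search; the Shannon-rate utilities, the full-power disagreement point, and all parameter names are illustrative assumptions rather than the paper's model.

```python
import numpy as np

def nash_bargaining_power(gains, cross_gains, p_max, noise=1e-9, grid=50):
    """Toy two-AP Nash bargaining power allocation (illustrative sketch).

    gains[i]       : direct channel gain from AP i to its own client
    cross_gains[i] : interference gain from the other AP onto AP i's client
    """
    def rate(p_self, p_other, i):
        sinr = gains[i] * p_self / (noise + cross_gains[i] * p_other)
        return np.log2(1.0 + sinr)

    # disagreement point: both APs transmit at full power, no coordination
    d = np.array([rate(p_max, p_max, 0), rate(p_max, p_max, 1)])

    powers = np.linspace(0.05 * p_max, p_max, grid)
    best, best_val = (p_max, p_max), -np.inf
    for p0 in powers:
        for p1 in powers:
            u = np.array([rate(p0, p1, 0), rate(p1, p0, 1)])
            if np.all(u >= d):                   # individual rationality
                val = np.prod(u - d + 1e-12)     # Nash product to maximise
                if val > best_val:
                    best_val, best = val, (p0, p1)
    return best

# Example: strong cross-interference, so both APs back off from full power
print(nash_bargaining_power(gains=[1e-6, 1e-6], cross_gains=[5e-7, 5e-7], p_max=0.1))
```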

    Unipolar Double-Star Submodule for Modular Multilevel Converter With DC Fault Blocking Capability


    Grasp Multiple Objects with One Hand

    The human hand's complex kinematics allow for simultaneous grasping and manipulation of multiple objects, which is essential for tasks like object transfer and in-hand manipulation. Despite its importance, robotic multi-object grasping remains underexplored and presents challenges in kinematics, dynamics, and object configurations. This paper introduces MultiGrasp, a two-stage method for multi-object grasping on a tabletop with a multi-finger dexterous hand. It involves (i) generating pre-grasp proposals and (ii) executing the grasp and lifting the objects. Experimental results primarily focus on dual-object grasping and report a 44.13% success rate, showcasing adaptability to unseen object configurations and imprecise grasps. The framework also demonstrates the capability to grasp more than two objects, albeit at a reduced inference speed.
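    The two-stage structure can be summarised with a small interface sketch; every class, method, and parameter name below (GraspProposal, proposal_generator, controller.close_and_lift, and so on) is a hypothetical placeholder used for illustration, not the MultiGrasp codebase.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class GraspProposal:
    """Hypothetical container for one multi-object pre-grasp proposal."""
    hand_pose: np.ndarray       # 6-DoF wrist pose
    joint_angles: np.ndarray    # dexterous-hand joint configuration
    object_ids: List[str]       # objects intended to be grasped together
    score: float                # quality score from the proposal generator

def multi_grasp(scene, proposal_generator, controller, top_k=5) -> Optional[GraspProposal]:
    """Two-stage tabletop multi-object grasping loop (illustrative sketch).

    Stage (i): sample and rank pre-grasp proposals for the target objects.
    Stage (ii): execute the best proposals until one grasp-and-lift succeeds.
    """
    proposals = sorted(proposal_generator(scene), key=lambda g: g.score, reverse=True)
    for grasp in proposals[:top_k]:
        controller.move_to(grasp.hand_pose, grasp.joint_angles)
        if controller.close_and_lift(grasp.object_ids):
            return grasp        # all target objects lifted together
    return None                 # fall back, e.g. to single-object grasping
```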

    MSRL: Distributed Reinforcement Learning with Dataflow Fragments

    Reinforcement learning (RL) trains many agents, which is resource-intensive and must scale to large GPU clusters. Different RL training algorithms offer different opportunities for distributing and parallelising the computation. Yet, current distributed RL systems tie the definition of RL algorithms to their distributed execution: they hard-code particular distribution strategies and only accelerate specific parts of the computation (e.g. policy network updates) on GPU workers. Fundamentally, current systems lack abstractions that decouple RL algorithms from their execution. We describe MindSpore Reinforcement Learning (MSRL), a distributed RL training system that supports distribution policies governing how the RL training computation is parallelised and distributed on cluster resources, without requiring changes to the algorithm implementation. MSRL introduces the new abstraction of a fragmented dataflow graph, which maps Python functions from an RL algorithm's training loop to parallel computational fragments. Fragments are executed on different devices by translating them to low-level dataflow representations, e.g. computational graphs as supported by deep learning engines, CUDA implementations, or multi-threaded CPU processes. We show that MSRL subsumes the distribution strategies of existing systems while scaling RL training to 64 GPUs.
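    The decoupling idea can be pictured with a small sketch: the training loop is a list of fragments, and a separate distribution policy decides where each fragment runs. The class names and placement mechanism below are assumptions made for illustration; they are not MSRL's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Fragment:
    """One computational fragment of an RL training loop (sketch)."""
    name: str
    fn: Callable         # e.g. acting, replay sampling, policy update
    device: str = "CPU"  # assigned later by the distribution policy

@dataclass
class DistributionPolicy:
    """Maps fragment names to devices, independently of the algorithm."""
    placement: Dict[str, str]

    def apply(self, fragments: List[Fragment]) -> List[Fragment]:
        for f in fragments:
            f.device = self.placement.get(f.name, "CPU")
        return fragments

def build_training_step(fragments: List[Fragment]) -> Callable:
    """Compose the placed fragments back into one training step."""
    def step(state):
        for f in fragments:
            # a real system would dispatch f.fn to f.device (GPU worker,
            # CPU process, ...) instead of calling it inline
            state = f.fn(state)
        return state
    return step

# Example: the same algorithm under two different distribution policies
frags = [Fragment("act", lambda s: s), Fragment("learn", lambda s: s)]
single_gpu = DistributionPolicy({"act": "CPU", "learn": "GPU:0"})
multi_gpu = DistributionPolicy({"act": "GPU:0", "learn": "GPU:1"})
step = build_training_step(single_gpu.apply(frags))
```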

    LIGS: Learnable Intrinsic-Reward Generation Selection for Multi-Agent Learning

    Efficient exploration is important for reinforcement learners to achieve high rewards. In multi-agent systems, coordinated exploration and behaviour are critical for agents to jointly achieve optimal outcomes. In this paper, we introduce a new general framework for improving the coordination and performance of multi-agent reinforcement learning (MARL) agents. Our framework, the Learnable Intrinsic-Reward Generation Selection algorithm (LIGS), introduces an adaptive learner, the Generator, which observes the agents and learns to construct intrinsic rewards online that coordinate the agents' joint exploration and joint behaviour. Using a novel combination of MARL and switching controls, LIGS determines the best states at which to add intrinsic rewards, which leads to a highly efficient learning process. LIGS can subdivide complex tasks, making them easier to solve, and enables systems of MARL agents to quickly solve environments with sparse rewards. LIGS can seamlessly adopt existing MARL algorithms, and our theory shows that it ensures convergence to policies that deliver higher system performance. We demonstrate its superior performance in challenging tasks in Foraging and StarCraft II.
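    A toy sketch of the core mechanic, an intrinsic reward added only at states selected by a switching rule, is given below; the count-based bonus stands in for the learned Generator, and all names and thresholds are illustrative assumptions rather than the paper's construction.

```python
from collections import defaultdict

class IntrinsicRewardSwitcher:
    """Toy stand-in for the LIGS Generator: adds an intrinsic bonus only at
    states picked by a switching rule (here, rarely visited states).
    Tabular and count-based for illustration; states must be hashable."""

    def __init__(self, bonus=0.1, switch_threshold=5):
        self.visits = defaultdict(int)
        self.bonus = bonus
        self.switch_threshold = switch_threshold

    def switch(self, state):
        # switching control: decide whether to intervene at this state
        return self.visits[state] < self.switch_threshold

    def shape(self, state, ext_reward):
        self.visits[state] += 1
        if self.switch(state):
            return ext_reward + self.bonus  # reward the agents train on
        return ext_reward

# Example: agents would train on shaper.shape(s, r) instead of the raw r
shaper = IntrinsicRewardSwitcher()
print(shaper.shape(("grid", 3, 4), 0.0))
```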